
    The restricted sumsets in $\mathbb{Z}_n$

    Let $h\geq 2$ be a positive integer. For any subset $\mathcal{A}\subset \mathbb{Z}_n$, let $h^{\wedge}\mathcal{A}$ be the set of the elements of $\mathbb{Z}_n$ which are sums of $h$ distinct elements of $\mathcal{A}$. In this paper, we obtain some new results on $4^{\wedge}\mathcal{A}$ and $5^{\wedge}\mathcal{A}$. For example, we show that if $|\mathcal{A}|\geq 0.4045n$ and $n$ is odd, then $4^{\wedge}\mathcal{A}=\mathbb{Z}_{n}$; under some conditions, if $n$ is even and $|\mathcal{A}|$ is close to $n/4$, then $4^{\wedge}\mathcal{A}=\mathbb{Z}_{n}$.
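    The operation $h^{\wedge}\mathcal{A}$ is easy to check by brute force on small cases; a minimal sketch (the helper name `restricted_sumset` is ours, not the paper's):

    ```python
    from itertools import combinations

    def restricted_sumset(A, h, n):
        """Return h^A: all residues mod n that are sums of h distinct elements of A."""
        return {sum(c) % n for c in combinations(set(A), h)}

    # Sanity check: for n = 5 and A = Z_5, every residue is a sum of 4 distinct
    # elements, consistent with the stated theorem (|A| = 5 >= 0.4045*5, n odd).
    print(sorted(restricted_sumset(range(5), 4, 5)))  # → [0, 1, 2, 3, 4]
    ```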

    GMNN: Graph Markov Neural Networks

    This paper studies semi-supervised object classification in relational data, which is a fundamental problem in relational data modeling. The problem has been extensively studied in the literature of both statistical relational learning (e.g., relational Markov networks) and graph neural networks (e.g., graph convolutional networks). Statistical relational learning methods can effectively model the dependency of object labels through conditional random fields for collective classification, whereas graph neural networks learn effective object representations for classification through end-to-end training. In this paper, we propose the Graph Markov Neural Network (GMNN), which combines the advantages of both worlds. A GMNN models the joint distribution of object labels with a conditional random field, which can be effectively trained with the variational EM algorithm. In the E-step, one graph neural network learns effective object representations for approximating the posterior distributions of object labels. In the M-step, another graph neural network is used to model the local label dependency. Experiments on object classification, link classification, and unsupervised node representation learning show that GMNN achieves state-of-the-art results. Comment: ICML 201

    Optimal Control with State Constraints for Stochastic Evolution Equation with Jumps in Hilbert Space

    This paper studies a stochastic optimal control problem with state constraints, where the state equation is described by a controlled stochastic evolution equation with jumps in Hilbert space and the control domain is assumed to be convex. By means of Ekeland's variational principle, combined with the convex variation method and the duality technique, necessary conditions for optimality are derived in the form of a stochastic maximum principle.

    Partial Information Stochastic Differential Games for Backward Stochastic Systems Driven by Lévy Processes

    In this paper, we consider a partial information two-person zero-sum stochastic differential game problem where the system is governed by a backward stochastic differential equation driven by Teugels martingales associated with a Lévy process and an independent Brownian motion. One sufficient condition (a verification theorem) and one necessary condition for the existence of optimal controls are proved. To illustrate the general results, a linear quadratic stochastic differential game problem is discussed.

    The Limits of Error Correction with lp Decoding

    An unknown vector f in R^n can be recovered from corrupted measurements y = Af + e, where A in R^{m×n} (m > n) is the coding matrix, if the unknown error vector e is sparse. We investigate the relationship between the fraction of errors and the recovery ability of lp-minimization (0 < p <= 1), which returns a vector x minimizing the "lp-norm" of y - Ax. We give sharp thresholds of the fraction of errors that determine the successful recovery of f. If e is an arbitrary unknown vector, the threshold strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. If e has fixed support and fixed signs on the support, the threshold is 2/3 for all p in (0, 1), while the threshold is 1 for l1-minimization. Comment: 5 pages, 1 figure. ISIT 201
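    For p = 1 the decoder is a linear program; a minimal sketch using NumPy and SciPy (assuming both are available; the names and the toy dimensions are ours). By construction the minimizer's objective never exceeds ||e||_1, since x = f is feasible:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def l1_decode(A, y):
        """Minimize ||y - A x||_1 over x via the standard LP reformulation:
        min sum(t)  subject to  -t <= y - A x <= t,  variables (x, t)."""
        m, n = A.shape
        c = np.concatenate([np.zeros(n), np.ones(m)])          # objective: sum of t
        A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])   # Ax - t <= y ; -Ax - t <= -y
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * n + [(0, None)] * m          # x free, t >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[:n]

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))        # tall coding matrix, m > n
    f = rng.standard_normal(5)
    e = np.zeros(20); e[3] = 4.0            # one gross corruption, well under the threshold
    x_hat = l1_decode(A, A @ f + e)         # recovers f for this sparse error
    ```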

    On the Performance of Sparse Recovery via L_p-minimization (0 <= p <= 1)

    It is known that a high-dimensional sparse vector x* in R^n can be recovered from low-dimensional measurements y = Ax*, where A in R^{m×n} (m < n). In this paper, we investigate the recovery ability of l_p-minimization (0 <= p <= 1) as p varies, where l_p-minimization returns a vector with the least l_p "norm" among all the vectors x satisfying Ax = y. Besides analyzing the performance of strong recovery, where l_p-minimization needs to recover all the sparse vectors up to a certain sparsity, we also for the first time analyze the performance of "weak" recovery of l_p-minimization (0 <= p < 1), where the aim is to recover all the sparse vectors on one support with a fixed sign pattern. When m/n goes to 1, we provide sharp thresholds of the sparsity ratio that differentiate success from failure via l_p-minimization. For strong recovery, the threshold strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. Surprisingly, for weak recovery, the threshold is 2/3 for all p in [0, 1), while the threshold is 1 for l_1-minimization. We also explicitly demonstrate that l_p-minimization (p < 1) can return a denser solution than l_1-minimization. For any m/n < 1, we provide bounds on the sparsity ratio for strong recovery and weak recovery, respectively, below which l_p-minimization succeeds with overwhelming probability. Our bound for strong recovery improves on the existing bounds when m/n is large. Regarding the recovery threshold, l_p-minimization has a higher threshold with smaller p for strong recovery; the threshold is the same for all p for sectional recovery; and l_1-minimization can outperform l_p-minimization for weak recovery. These results are in contrast to the traditional wisdom that l_p-minimization has better sparse recovery ability than l_1-minimization since it is closer to l_0-minimization. We provide an intuitive explanation for our findings and use numerical examples to illustrate the theoretical predictions.
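    The l_1 case of this recovery problem (basis pursuit) can likewise be sketched as a linear program via the standard positive/negative split x = u - v. A hedged illustration, not the paper's experimental setup; it checks only the guaranteed invariants that the solution is feasible and has l_1 norm no larger than the true sparse vector's:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def l1_recover(A, y):
        """Basis pursuit: min ||x||_1 subject to A x = y, via x = u - v, u, v >= 0."""
        m, n = A.shape
        c = np.ones(2 * n)                  # ||x||_1 = sum(u) + sum(v) at the optimum
        A_eq = np.hstack([A, -A])           # A(u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
        u, v = res.x[:n], res.x[n:]
        return u - v

    rng = np.random.default_rng(1)
    A = rng.standard_normal((10, 20))                       # wide matrix, m < n
    x_star = np.zeros(20); x_star[[2, 11]] = [1.5, -2.0]    # 2-sparse signal
    x_hat = l1_recover(A, A @ x_star)
    ```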

    Stochastic Evolution Equation Driven by Teugels Martingale and Its Optimal Control

    The paper is concerned with a class of stochastic evolution equations in Hilbert space with random coefficients, driven by Teugels martingales and an independent multi-dimensional Brownian motion, together with its optimal control problem. Here Teugels martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes (see Nualart and Schoutens). There are three major ingredients. The first is to prove the existence and uniqueness of solutions by a continuous dependence theorem for solutions combined with the parameter extension method. The second is to establish the stochastic maximum principle and verification theorem for our optimal control problem by the classic convex variation method and duality technique. The third is to present an example of a Cauchy problem for a controlled stochastic partial differential equation driven by Teugels martingales to which our theoretical results apply. Comment: arXiv admin note: text overlap with arXiv:1610.0491

    Maximum Principle of Forward-Backward Stochastic Differential System of Mean-Field Type with Observation Noise

    This paper is concerned with a partial information optimal control problem of mean-field type under partial observation, where the system is given by a controlled mean-field forward-backward stochastic differential equation with correlated noises between the system and the observation; moreover, the observation coefficients may depend not only on the control process but also on its probability distribution. Under standard assumptions on the coefficients, necessary and sufficient conditions for optimality of the control problem, in the form of Pontryagin's maximum principle, are established in a unified way. Comment: arXiv admin note: substantial text overlap with arXiv:1708.0300

    Label-Guided Graph Exploration with Adjustable Ratio of Labels

    The graph exploration problem is to visit all the nodes of a connected graph by a mobile entity, e.g., a robot. The robot has no a priori knowledge of the topology of the graph or of its size. Cohen et al. \cite{Ilcinkas08} introduced label-guided graph exploration, which allows the system designer to add short labels to the graph nodes in a preprocessing stage; these labels can guide the robot in the exploration of the graph. In this paper, we address the problem of adjustable 1-bit label-guided graph exploration. We focus on labeling schemes that not only enable a robot to explore the graph but also allow the system designer to adjust the ratio of the number of different labels. This flexibility is necessary when maintaining different labels may have different costs or when the ratio is pre-specified. We present 1-bit labeling (two colors, namely black and white) schemes for this problem along with a labeling algorithm for generating the required labels. Given an $n$-node graph and a rational number $\rho$, we can design a 1-bit labeling scheme such that $n/b \geq \rho$, where $b$ is the number of nodes labeled black. The robot uses $O(\rho\log\Delta)$ bits of memory for exploring all graphs of maximum degree $\Delta$. The exploration is completed in time $O(n\Delta^{\frac{16\rho+7}{3}}/\rho + \Delta^{\frac{40\rho+10}{3}})$. Moreover, our labeling scheme can work on graphs containing loops and multiple edges, while that of Cohen et al. focuses on simple graphs. Comment: 20 pages, 7 figures. Accepted by International Journal of Foundations of Computer Science
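    The ratio constraint itself is easy to illustrate; a toy labeling (ours, not the paper's actual scheme, whose black nodes must also be placed so the robot can explore) that colors every $\lceil\rho\rceil$-th node black so that $n/b \geq \rho$ on this example:

    ```python
    from math import ceil

    def ratio_labeling(n, rho):
        """Toy 1-bit labeling: node i is black iff i % ceil(rho) == 0.
        This only illustrates the adjustable ratio n/b >= rho; the paper's
        scheme additionally chooses black nodes to guide the exploration."""
        k = ceil(rho)
        return ["black" if i % k == 0 else "white" for i in range(n)]

    labels = ratio_labeling(30, 2.5)
    b = labels.count("black")   # 10 black nodes, so n/b = 3 >= rho = 2.5
    ```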

    Non-zero Sum Stochastic Differential Games of Fully Coupled Forward-Backward Stochastic Systems

    In this paper, an open-loop two-person non-zero sum stochastic differential game is considered for forward-backward stochastic systems. More precisely, the controlled systems are described by a fully coupled nonlinear multi-dimensional forward-backward stochastic differential equation driven by a multi-dimensional Brownian motion. One sufficient condition (a verification theorem) and one necessary condition for the existence of open-loop Nash equilibrium points for the corresponding two-person non-zero sum stochastic differential game are proved. The control domain needs to be convex, and the admissible controls for both players are allowed to appear in both the drift and diffusion of the state equations.